western blot
Catching scientific fraud could get a lot harder thanks to AI • The Register
Feature Generative AI poses interesting challenges for academic publishers tackling fraud in science papers as the technology shows the potential to fool human peer review. Describe an image for DALL-E, Stable Diffusion, and Midjourney, and they'll generate one in seconds. These text-to-picture systems have rapidly improved over the past few years, and what began as a research prototype, producing benign and wonderfully bizarre illustrations of baby daikon radishes walking dogs in 2021, has since morphed into commercial software, built by billion-dollar companies, capable of generating increasingly realistic images. These AI models can produce lifelike pictures of human faces, objects, and scenes, and it's only a matter of time before they get good at creating convincing scientific images and data, too. Text-to-image models are now widely accessible and pretty cheap to use, and they could help dodgy scientists forge results and publish sham research more easily.
AI software helps bust image fraud in academic papers
Scientific publishers such as the American Association for Cancer Research (AACR) and Taylor & Francis have begun attempting to detect fraud in academic paper submissions with an AI image-checking program called Proofig, reports The Register. Proofig, a product of an Israeli firm of the same name, aims to use "artificial intelligence, computer vision and image processing to review image integrity in scientific publications," according to the company's website. During a trial that ran from January 2021 to May 2022, AACR used Proofig to screen 1,367 papers accepted for publication, according to The Register. Of those, 208 papers required author contact to clear up issues such as mistaken duplications, and four papers were withdrawn. In particular, many journals need help detecting image-duplication fraud in Western blots, a protein-detection technique whose images consist of dark bands of varying widths.
- Law Enforcement & Public Safety > Fraud (0.39)
- Health & Medicine > Therapeutic Area (0.39)
Publishers use AI to catch bad scientists doctoring data
Analysis Shady scientists trying to publish bad research may want to think twice, as academic publishers are increasingly using AI software to automatically spot signs of data tampering. Duplications of images, in which the same picture of a cluster of cells, for example, is copied, flipped, rotated, shifted, or cropped, are unfortunately quite common. In cases where the errors aren't accidental, the doctored images are created to make it look as if the researchers have more data and conducted more experiments than they really did. Image duplication was the top reason papers were retracted by the American Association for Cancer Research (AACR) from 2016 to 2020, according to Daniel Evanko, the association's Director of Journal Operations and Systems. Having to retract a paper damages both the authors' and the publisher's reputations.
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Asia > Middle East > Israel (0.05)
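The transform-aware duplication check described above can be sketched in a few lines. The following is a minimal illustration, not Proofig's actual algorithm: it fingerprints each image with a perceptual "average hash" and compares a candidate against simple flipped and rotated variants. Shifts and crops, which the article also mentions, would need more robust region matching than this.

```python
# Minimal sketch (NOT Proofig's algorithm): flag duplicated image
# panels by comparing perceptual "average hashes" of one image against
# flips/rotations of another, so simple transforms don't hide a copy.
import numpy as np

def average_hash(img, size=8):
    """Downsample to size x size and threshold at the mean: a 64-bit fingerprint."""
    h, w = img.shape
    ys = (np.arange(size) * h) // size  # rows to sample
    xs = (np.arange(size) * w) // size  # columns to sample
    small = img[np.ix_(ys, xs)].astype(float)
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two boolean fingerprints."""
    return int(np.count_nonzero(a != b))

def looks_duplicated(img_a, img_b, threshold=5):
    """True if img_b closely matches img_a or any flip/rotation of it."""
    variants = [img_a, np.fliplr(img_a), np.flipud(img_a),
                np.rot90(img_a), np.rot90(img_a, 2), np.rot90(img_a, 3)]
    hash_b = average_hash(img_b)
    return any(hamming(average_hash(v), hash_b) <= threshold for v in variants)
```

A real screening tool would compare sub-panels across every figure in a submission rather than whole images, but the fingerprint-and-compare idea is the same.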
Forensic Analysis of Synthetically Generated Scientific Images
Mandelli, Sara, Cozzolino, Davide, Cardenuto, Joao P., Moreira, Daniel, Bestagini, Paolo, Scheirer, Walter, Rocha, Anderson, Verdoliva, Luisa, Tubaro, Stefano, Delp, Edward J.
The widespread diffusion of synthetically generated content is a serious threat that needs urgent countermeasures. The generation of synthetic content is not restricted to multimedia data like videos, photographs, or audio sequences, but extends to biological images as well, such as western-blot and microscopic images. In this paper, we focus on the detection of synthetically generated western-blot images. Western-blot images are widely used in the biomedical literature, and it has already been shown that they can be easily counterfeited, with little hope of spotting the manipulations by visual inspection or with standard forensic detectors. To overcome the absence of a publicly available dataset, we create a new dataset comprising more than 14K original western-blot images and 18K synthetic western-blot images, generated by three different state-of-the-art generation methods. We then investigate different strategies to detect synthetic western blots, exploring binary classification methods as well as one-class detectors. In both scenarios, we never use synthetic western-blot images at training time. The achieved results show that synthetically generated western-blot images can be spotted with good accuracy, even though the detectors are not optimized on synthetic versions of these scientific images.
- Europe > Italy > Campania > Naples (0.04)
- South America > Brazil (0.04)
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- Government (0.68)
- Health & Medicine (0.66)
- Information Technology > Security & Privacy (0.50)
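The one-class setting described in the abstract, training only on genuine images and flagging outliers, can be illustrated with a deliberately simple detector. This sketch is not the authors' pipeline: the hand-crafted features and the Gaussian model are stand-ins for whatever representation a real forensic system would learn, but they show how a detector can work without ever seeing a synthetic image during training.

```python
# One-class sketch in the spirit of the paper's setup: fit a model to
# REAL images only, then flag anything whose features fall far from that
# distribution. Features and threshold here are illustrative, not the
# authors' actual method.
import numpy as np

def features(img):
    """Crude hand-crafted statistics of a grayscale image (illustrative)."""
    gx = np.diff(img, axis=1)  # horizontal gradients
    gy = np.diff(img, axis=0)  # vertical gradients
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

class OneClassGaussian:
    """Gaussian fit to features of genuine images; score by Mahalanobis distance."""

    def fit(self, real_images):
        X = np.stack([features(im) for im in real_images])
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        dists = [self._dist(x) for x in X]
        # Tolerate 1% of genuine training images as false alarms.
        self.threshold = np.percentile(dists, 99)
        return self

    def _dist(self, x):
        diff = x - self.mu
        return float(np.sqrt(diff @ self.cov_inv @ diff))

    def is_suspect(self, img):
        return self._dist(features(img)) > self.threshold
```

The appeal of the one-class design, as in the paper, is that it needs no examples from any particular generator, so it is not tied to the GANs or diffusion models that happen to exist today.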
How a Sharp-Eyed Scientist Became Biology's Image Detective
In June of 2013, Elisabeth Bik, a microbiologist, grew curious about the subject of plagiarism. She had read that scientific dishonesty was a growing problem, and she idly wondered if her work might have been stolen by others. One day, she pasted a sentence from one of her scientific papers into the Google Scholar search engine. She found that several of her sentences had been copied, without permission, in an obscure online book. She pasted a few more sentences from the same book chapter into the search box, and discovered that some of them had been purloined from other scientists' writings.
- North America > United States > California (0.05)
- Europe > Netherlands (0.05)
- Asia > China (0.05)
- Information Technology > Information Management > Search (0.55)
- Information Technology > Communications (0.50)
- Information Technology > Artificial Intelligence (0.35)
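Bik's manual routine, pasting sentences into a search engine and looking for verbatim reuse, is essentially a phrase-overlap check. A toy sketch of that idea, comparing word 5-grams between two texts (illustrative only; real plagiarism screening searches large indexed corpora, not a single pair of documents):

```python
# Toy phrase-reuse check: report word n-grams that appear verbatim in
# both texts. Illustrative only; not how Google Scholar or commercial
# plagiarism screeners actually work.
import re

def ngrams(text, n=5):
    """Set of lowercase word n-grams in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(original, suspect, n=5):
    """Word n-grams that occur verbatim in both texts."""
    return ngrams(original, n) & ngrams(suspect, n)
```

Any non-empty result is a candidate for the kind of side-by-side comparison Bik performs by eye.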